Referring video object segmentation (RVOS) aims to segment, in the frames of a given video, the object to which a referring expression refers. Previous methods adopt a multi-stage approach, designing complex pipelines to obtain promising results. Recently, end-to-end Transformer-based methods have proved their superiority. In this work, we draw on the advantages of both to provide a simple and effective pipeline for RVOS. First, we improve the state-of-the-art one-stage method ReferFormer to obtain mask sequences that are strongly correlated with the language descriptions. Second, based on a reliable, high-quality keyframe, we leverage the strong performance of a video object segmentation model to further improve the quality and temporal consistency of the mask results. Our single model reaches 70.3 J&F on the Referring Youtube-VOS validation set and 63.0 on the test set. After ensembling, we achieve 64.1 on the final leaderboard, ranking 1st in the CVPR 2022 Referring Youtube-VOS challenge. Code will be available at https://github.com/Zhiweihhh/cvpr2022-rvos-challenge.git.
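The keyframe-then-propagate step can be illustrated with a minimal sketch (names and scores are hypothetical; the authors' actual selection criterion and VOS model may differ):

```python
import numpy as np

# Hypothetical per-frame confidence scores produced by a ReferFormer-style
# model for the mask sequence of one referred object.
frame_scores = np.array([0.62, 0.71, 0.93, 0.88, 0.54])

def select_keyframe(scores: np.ndarray) -> int:
    """Pick the most reliable frame to seed VOS-based propagation."""
    return int(np.argmax(scores))

key = select_keyframe(frame_scores)
# A VOS model would then propagate the keyframe mask backward over frames
# [key-1 .. 0] and forward over [key+1 .. T-1], replacing the per-frame
# masks with temporally consistent ones.
print(f"keyframe index: {key}")
```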
Referring image segmentation aims to segment the target object described by a given natural language expression. Typically, referring expressions contain complex relationships between the target and its surrounding objects. The main challenge of this task is to understand the visual and linguistic content simultaneously and to find the referred object accurately among all instances in the image. Currently, the most effective way to address this is to obtain aligned multi-modal features by computing the correlation between the visual and linguistic modalities under the supervision of the ground-truth mask. However, existing paradigms struggle to thoroughly understand visual and linguistic content because they cannot directly perceive information about the objects surrounding the referred target. This prevents them from learning aligned multi-modal features and leads to inaccurate segmentation. To address this issue, we present a position-aware contrastive alignment network (PCAN) that enhances the alignment of multi-modal features by guiding the interaction between vision and language through prior position information. Our PCAN consists of two modules: 1) a Position Aware Module (PAM), which provides position information for all objects related to the natural language description, and 2) a Contrastive Language Understanding Module (CLUM), which enhances multi-modal alignment by comparing the features of the referred object with those of related objects. Extensive experiments on three benchmarks demonstrate that our PCAN performs favorably against state-of-the-art methods. Our code will be made publicly available.
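The contrastive comparison CLUM describes can be sketched as an InfoNCE-style loss; this is an illustrative reading under our own assumptions (feature shapes, temperature), not the authors' released code:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(lang, referred, related, tau=0.07):
    """InfoNCE-style loss: pull the language feature toward the referred
    object's visual feature and push it away from related objects."""
    lang = F.normalize(lang, dim=-1)                                  # (D,)
    cand = F.normalize(torch.cat([referred.unsqueeze(0), related], 0), dim=-1)  # (1+K, D)
    logits = cand @ lang / tau                                        # (1+K,)
    target = torch.zeros(1, dtype=torch.long)                         # index 0 = referred object
    return F.cross_entropy(logits.unsqueeze(0), target)

# Toy check with random features (D=256, K=4 related objects).
loss = contrastive_alignment_loss(torch.randn(256), torch.randn(256), torch.randn(4, 256))
print(loss.item())
```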
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
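The most common workaround for oversized samples, patch-based training, amounts to tiling the input; a minimal sketch (patch and stride values are illustrative, not from the survey):

```python
import numpy as np

def extract_patches(volume, patch=64, stride=64):
    """Split a large image into patches so each fits in memory,
    the strategy reported by 69% of respondents."""
    h, w = volume.shape[:2]
    return [volume[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

image = np.zeros((512, 512), dtype=np.float32)  # stand-in for a large sample
patches = extract_patches(image)
print(len(patches))  # 64 non-overlapping 64x64 patches
```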
The space-air-ground integrated network (SAGIN), one of the key technologies for next-generation mobile communication systems, can facilitate data transmission for users all over the world, especially in remote areas where vast amounts of informative data are collected by Internet of remote things (IoRT) devices to support various data-driven artificial intelligence (AI) services. However, training AI models centrally with the assistance of SAGIN faces the challenges of a highly constrained network topology, inefficient data transmission, and privacy issues. To tackle these challenges, we first propose a novel topology-aware federated learning framework for the SAGIN, namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the ground layer leverage their private data to perform model training locally, while the air nodes in the air layer and the ring-structured low earth orbit (LEO) satellite constellation in the space layer are in charge of model aggregation (synchronization) at different scales. To further enhance the communication efficiency and inference performance of OBL, an efficient Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm is designed, taking the data class distribution of the air nodes as well as their geographic locations into account. Furthermore, we extend our OBL framework and CNASA algorithm to adapt to more complex multi-orbit satellite networks. We analyze the convergence of our OBL framework and conclude that the CNASA algorithm contributes to the fast convergence of the global model. Extensive experiments based on realistic datasets corroborate the superior performance of our algorithm over the benchmark policies.
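The multi-scale aggregation idea can be sketched as two levels of weighted averaging; the topology and sample counts below are toy assumptions, not the paper's setup:

```python
import numpy as np

def fedavg(models, weights):
    """Weighted average of model parameter vectors (FedAvg-style)."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Toy two-level hierarchy mirroring OBL: IoRT devices -> air nodes -> satellite.
rng = np.random.default_rng(0)
device_models = [rng.normal(size=8) for _ in range(6)]    # local updates
air_node_a = fedavg(device_models[:3], [10, 20, 30])      # weights = samples per device
air_node_b = fedavg(device_models[3:], [15, 15, 30])
global_model = fedavg([air_node_a, air_node_b], [60, 60]) # satellite-layer aggregation
print(global_model.shape)
```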
Existing deep learning based HDRTV reconstruction methods assume a particular kind of tone mapping operator (TMO) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks in modeling the realistic degradation: information over-preservation, color bias, and possible artifacts, making the trained reconstruction networks hard to generalize to real-world cases. To solve this problem, we propose a learning-based data synthesis approach that learns the properties of real-world SDRTVs by integrating several tone mapping priors into both the network structure and the loss functions. Specifically, we design a conditioned two-stream network that uses prior tone mapping results as guidance to synthesize SDRTVs via both global and local transformations. To train the data synthesis network, we formulate a novel self-supervised content loss to constrain different aspects of the synthesized SDRTVs at regions with different brightness distributions, and an adversarial loss to make the details more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. We then collect two inference datasets containing labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.
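For concreteness, one classic global TMO of the kind such a synthesis network could take as a prior is Reinhard's operator; this sketch is only an example of a hand-crafted prior, not necessarily the authors' choice:

```python
import numpy as np

def reinhard_tonemap(hdr, eps=1e-6):
    """Reinhard global tone mapping: compress HDR luminance into [0, 1)."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
    scaled = 0.18 * lum / log_avg                  # key value a = 0.18
    mapped = scaled / (1.0 + scaled)               # dynamic range compression
    return hdr * (mapped / (lum + eps))[..., None] # rescale RGB by new luminance

hdr = np.random.rand(4, 4, 3).astype(np.float32) * 100.0  # toy HDR frame
sdr = reinhard_tonemap(hdr)
print(sdr.min(), sdr.max())
```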
Existing fake news detection methods aim to classify news as true or false and provide veracity explanations, achieving remarkable performance. However, they often tailor automated solutions to manually fact-checked reports, suffering from limited news coverage and debunking delays. When a piece of news has not yet been fact-checked or debunked, a certain amount of relevant raw reports is usually disseminated across various media outlets, containing the wisdom of crowds for verifying the news claim and explaining its verdict. In this paper, we propose a novel Coarse-to-fine Cascaded Evidence-Distillation (CofCED) neural network for explainable fake news detection based on such raw reports, thereby alleviating the dependency on fact-checked ones. Specifically, we first utilize a hierarchical encoder for web text representation, and then develop two cascaded selectors to select the most explainable sentences on top of the selected top-K reports in a coarse-to-fine manner. In addition, we construct two explainable fake news datasets, which are publicly available. Experimental results demonstrate that our model significantly outperforms state-of-the-art baselines and generates high-quality explanations from diverse evaluation perspectives.
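The cascaded coarse-to-fine selection can be sketched as two rounds of top-k filtering; the scores below stand in for what trained selectors would predict, so names and values are illustrative only:

```python
import numpy as np

def coarse_to_fine_select(report_scores, sentence_scores, top_k_reports=2, top_k_sents=2):
    """Coarse step: keep the top-K most relevant reports.
    Fine step: within those, keep the highest-scoring sentences as evidence."""
    keep = np.argsort(report_scores)[::-1][:top_k_reports]
    evidence = []
    for r in keep:
        sents = np.argsort(sentence_scores[r])[::-1][:top_k_sents]
        evidence.append((int(r), sents.tolist()))
    return evidence

report_scores = np.array([0.2, 0.9, 0.6])
sentence_scores = [np.array([0.1, 0.8]), np.array([0.7, 0.3]), np.array([0.5, 0.4])]
print(coarse_to_fine_select(report_scores, sentence_scores))
# -> [(1, [0, 1]), (2, [0, 1])]: reports 1 and 2 kept, with their best sentences
```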
Magnetic resonance imaging (MRI) with high resolution (HR) provides more detailed information for accurate diagnosis and quantitative image analysis. Despite significant progress, most existing medical image reconstruction networks have two flaws: 1) they are designed as black boxes, thus lacking sufficient interpretability and further limiting their practical applications (interpretable neural network models are of significant interest since they enhance the trustworthiness required in clinical practice when dealing with medical images); and 2) most existing SR reconstruction approaches use only a single contrast or a simple multi-contrast fusion mechanism, neglecting the complex relationships between different contrasts that are critical for SR improvement. To address these issues, in this paper, a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction is proposed. The model-guided image SR reconstruction approach solves a manually designed objective function to reconstruct the HR MRI. We unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix and an explicit multi-contrast relationship matrix into account during end-to-end optimization. Extensive experiments on the multi-contrast IXI dataset and the BraTS 2019 dataset demonstrate the superiority of our proposed model.
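A generic deep-unfolding skeleton may clarify the idea: each network stage mirrors one step of an iterative solver for the data-consistency term, with learned modules interleaved. This sketch assumes a simple least-squares objective and omits MGDUN's learned prior and multi-contrast matrices:

```python
import numpy as np

def unfolded_sr(y, A, stages=5, step=0.1):
    """Each 'stage' is one gradient step on ||A x - y||^2; a deep unfolding
    network replaces/augments these steps with learned refinement modules."""
    x = A.T @ y                               # cheap initialization
    for _ in range(stages):
        x = x - step * A.T @ (A @ x - y)      # data-consistency update
        # MGDUN would additionally apply a learned prior / multi-contrast
        # fusion module to x at this point (omitted in this sketch).
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 32)) / 4.0           # stand-in observation (degradation) matrix
x_true = rng.normal(size=32)
x_hat = unfolded_sr(A @ x_true, A)
print(x_hat.shape)
```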
Integrating multiple online social networks (OSNs) has important implications for many downstream social mining tasks, such as user preference modeling, recommendation, and link prediction. However, this is unfortunately accompanied by growing privacy concerns about leaking sensitive user information. How to fully utilize the data from different online social networks while preserving user privacy remains unresolved. To this end, we propose a cross-network social user embedding framework, namely DP-CroSUE, to learn comprehensive representations of users in a privacy-preserving way. We jointly consider information from partially aligned social networks with differential privacy guarantees. In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations across heterogeneous data types. Next, to discover user linkages across social networks, we perform unsupervised embedding-based user alignment, where user embeddings are obtained via heterogeneous network embedding techniques. To further enhance the user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through the aligned users. Extensive experiments on three real-world datasets show that our approach achieves significant improvements on the user interest prediction task as well as in defending against attribute inference attacks on the embeddings.
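A standard way to privatize an embedding before sharing is the Gaussian mechanism; the sketch below uses a single noise scale, whereas a hybrid notion like DP-CroSUE's would calibrate it per data type (parameter values are illustrative):

```python
import numpy as np

def gaussian_mechanism(emb, sensitivity=1.0, epsilon=1.0, delta=1e-5, seed=0):
    """Perturb an embedding to satisfy (epsilon, delta)-DP for the given
    L2 sensitivity, using the classic Gaussian-mechanism noise scale."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    rng = np.random.default_rng(seed)
    return emb + rng.normal(scale=sigma, size=emb.shape)

user_emb = np.random.rand(64)          # embedding learned from one network
private_emb = gaussian_mechanism(user_emb)
print(np.linalg.norm(private_emb - user_emb))
```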
As a promising privacy-preserving machine learning approach, federated learning (FL) enables model training across clients without compromising their confidential local data. However, existing FL methods suffer from low inference performance on non-uniformly distributed data, since most of them rely on Federated Averaging (FedAvg)-based aggregation. By averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of local models, which greatly limits the inference capability of FL. Worse still, in each round of FL training, FedAvg dispatches the same initial local model to all clients, which can easily lead to a stuck-at-local-optimum search for the global model. To address these issues, this paper presents a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server in FedMR shuffles each layer of the collected local models and recombines them into new models for local training on clients. Thanks to the fine-grained model recombination and local training in each FL round, FedMR can quickly converge toward a globally optimal model for all clients. Comprehensive experimental results demonstrate that, compared with state-of-the-art FL methods, FedMR significantly improves inference accuracy without incurring extra communication overhead.
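The server-side recombination step can be sketched directly: shuffle each layer independently across clients and reassemble, so every client receives a model whose layers come from different peers. Layer names and "tensor" values are placeholders:

```python
import random

def recombine(local_models, seed=0):
    """Per-layer shuffle of collected local models (FedMR-style recombination)."""
    rng = random.Random(seed)
    layer_names = list(local_models[0].keys())
    n = len(local_models)
    new_models = [dict() for _ in range(n)]
    for name in layer_names:
        order = list(range(n))
        rng.shuffle(order)                      # independent permutation per layer
        for dst, src in enumerate(order):
            new_models[dst][name] = local_models[src][name]
    return new_models

# Toy "state dicts": two layers, three clients (strings stand in for tensors).
clients = [{"conv": f"conv{i}", "fc": f"fc{i}"} for i in range(3)]
for m in recombine(clients):
    print(m)
```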
For sequential recommendation (SR) to succeed, recent works have focused on designing effective sequential encoders, fusing side information, and mining extra positive self-supervision signals. The strategy of sampling negative items at each time step is less explored. Due to the dynamics of users' interests and of model updates during training, treating randomly sampled items from a user's non-interacted item set as negatives can be uninformative. As a result, the model will learn user preferences over items inaccurately. Identifying informative negatives is challenging, because informative negative items are tied to both the dynamically changing interests and the model parameters (and the sampling process should also be efficient). To this end, we propose to Generate Negative samples (items) for SR (GenNi). A negative item is sampled at each time step based on the current SR model's learned user preferences over items. An efficient implementation is proposed to further accelerate the generation process, making it scalable to large-scale recommendation tasks. Extensive experiments on four public datasets verify the importance of providing high-quality negative samples for SR and demonstrate the effectiveness and efficiency of GenNi.
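The core idea, sampling negatives in proportion to the current model's scores so that hard (high-scoring, non-interacted) items are drawn more often, can be sketched as follows; the softmax weighting and sizes are our assumptions, not GenNi's exact efficient implementation:

```python
import numpy as np

def sample_negative(item_scores, interacted, rng):
    """Sample one negative among non-interacted items with probability
    proportional to the model's preference score (softmax-weighted)."""
    mask = np.ones_like(item_scores, dtype=bool)
    mask[list(interacted)] = False             # exclude positives
    probs = np.exp(item_scores[mask])
    probs /= probs.sum()
    candidates = np.flatnonzero(mask)
    return int(rng.choice(candidates, p=probs))

rng = np.random.default_rng(0)
scores = rng.normal(size=100)                  # current model scores for 100 items
print(sample_negative(scores, interacted={3, 17, 42}, rng=rng))
```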